Variance reduction in stochastic homogenization: proof of concept, using antithetic variables


Similar articles

Variance Reduction in Stochastic Homogenization Using Antithetic Variables

Some theoretical issues related to the problem of variance reduction in numerical approaches for stochastic homogenization are examined. On some simple, yet representative cases, it is demonstrated theoretically that a technique based on antithetic variables can indeed reduce the variance of the output of the computation, and decrease the overall computational cost of such a multiscale problem....
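
To make the idea concrete: in the simplest one-dimensional setting, the homogenized coefficient is the harmonic mean (E[1/a])^{-1} of the random coefficient field, and its finite-domain Monte Carlo approximation can be coupled with antithetic draws. The sketch below is a minimal illustration in that spirit; the coefficient law a_i = 1 + 2*U_i and all sizes are illustrative assumptions, not the model studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D model problem: the homogenized coefficient is the harmonic mean
# (E[1/a])^{-1}; its approximation over N cells is a*_N = N / sum_i (1/a_i).
# The law a_i = 1 + 2*U_i with U_i ~ U(0,1) is an illustrative assumption.

def apparent_coeff(U):
    a = 1.0 + 2.0 * U                  # per-cell coefficients
    return U.shape[-1] / np.sum(1.0 / a, axis=-1)

N, M = 50, 5000                        # cells per realization, MC realizations
U = rng.random((M, N))

plain = apparent_coeff(U)                                    # i.i.d. estimator
anti = 0.5 * (apparent_coeff(U) + apparent_coeff(1.0 - U))   # antithetic pairs

# a*_N is increasing in every U_i, so the antithetic coupling provably
# reduces variance; a pair costs two evaluations of the model, so the
# break-even point is half the plain variance.
print("plain variance      :", plain.var())
print("antithetic variance :", anti.var(), "(break-even:", plain.var() / 2, ")")
```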

Variance reduction for antithetic integral control of stochastic reaction networks

The antithetic integral feedback motif recently introduced in [6] is known to ensure robust perfect adaptation for the mean dynamics of a given molecular species involved in a complex stochastic biomolecular reaction network. However, it was observed that it also leads to a higher variance in the controlled network than that obtained when using a constitutive (i.e. open-loop) control strategy. ...
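
For orientation, the motif of [6] couples two controller species Z1 and Z2 that annihilate each other: Z1 is produced at a constant reference rate mu, Z2 is produced at rate theta*X from the controlled species X, and Z1 actuates the production of X, which forces E[X] -> mu/theta at stationarity. The Gillespie simulation below is a minimal sketch of this closed loop; the actuated birth-death network and all rate constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Antithetic integral feedback on a birth-death species X (sketch).
# Reactions (all rate constants are illustrative choices):
#   0 -> Z1        at rate mu          (reference)
#   X -> X + Z2    at rate theta*X     (measurement)
#   Z1 + Z2 -> 0   at rate eta*Z1*Z2   (annihilation, i.e. comparison)
#   Z1 -> Z1 + X   at rate k*Z1        (actuation)
#   X -> 0         at rate gamma*X     (degradation)
mu, theta, eta, k, gamma = 5.0, 1.0, 50.0, 2.0, 1.0

def ssa_endpoint(t_end):
    t, X, Z1, Z2 = 0.0, 0, 0, 0
    while True:
        rates = np.array([mu, theta * X, eta * Z1 * Z2, k * Z1, gamma * X])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return X
        r = rng.choice(5, p=rates / total)
        if r == 0:   Z1 += 1
        elif r == 1: Z2 += 1
        elif r == 2: Z1 -= 1; Z2 -= 1
        elif r == 3: X += 1
        else:        X -= 1

xs = [ssa_endpoint(100.0) for _ in range(200)]
# Robust perfect adaptation of the mean: E[X] -> mu/theta = 5, but the
# stationary variance of X is the quantity the paper above examines.
print("mean X :", np.mean(xs), " (target", mu / theta, ")")
print("var  X :", np.var(xs))
```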

Variance Reduction via Antithetic Markov Chains

We present a Monte Carlo integration method, antithetic Markov chain sampling (AMCS), that incorporates local Markov transitions in an underlying importance sampler. Like sequential Monte Carlo sampling, the proposed method uses a sequence of Markov transitions to guide the sampling toward influential regions of the integrand (modes). However, AMCS differs in the type of transitions that may be...

Variance Reduction in Stochastic Gradient Langevin Dynamics

Stochastic gradient-based Monte Carlo methods such as stochastic gradient Langevin dynamics are useful tools for posterior inference on large scale datasets in many machine learning applications. These methods scale to large datasets by using noisy gradients calculated using a mini-batch or subset of the dataset. However, the high variance inherent in these noisy gradients degrades performance ...
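
One estimator in this spirit replaces the raw minibatch gradient in the Langevin update with a control-variate (SVRG-style) estimate anchored at a periodically recomputed full gradient. The sketch below runs that idea on a toy Gaussian target; the data, step size, batch size, and epoch length are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Gaussian target: -log pi(w) = 0.5 * sum_i (a_i . w - b_i)^2 (flat
# prior), so the posterior mean solves the normal equations.
n, d = 500, 2
A = rng.standard_normal((n, d))
b = A @ np.array([1.0, -2.0]) + rng.standard_normal(n)

def grad_sum(w, idx):              # gradient of -log pi restricted to idx
    return A[idx].T @ (A[idx] @ w - b[idx])

eta, batch = 1e-4, 10
w, samples = np.zeros(d), []
for _ in range(50):                           # epochs, one snapshot each
    w_snap = w.copy()
    g_snap = grad_sum(w_snap, np.arange(n))   # full gradient at the snapshot
    for _ in range(n):
        idx = rng.integers(n, size=batch)
        # variance-reduced gradient estimate: minibatch difference + anchor
        g = (n / batch) * (grad_sum(w, idx) - grad_sum(w_snap, idx)) + g_snap
        # Langevin step: gradient drift plus injected Gaussian noise
        w = w - eta * g + np.sqrt(2.0 * eta) * rng.standard_normal(d)
        samples.append(w.copy())

burn = len(samples) // 2
print("sample mean    :", np.mean(samples[burn:], axis=0))
print("posterior mean :", np.linalg.solve(A.T @ A, A.T @ b))
```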

Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast conv...
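
The update itself is short: once per epoch, compute the full gradient at a snapshot; inside the epoch, correct each sampled component gradient by the same component's gradient at the snapshot. A minimal least-squares sketch follows; the problem data, step size, and epoch length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimize f(w) = (1/n) * sum_i 0.5 * (a_i . w - b_i)^2.
n, d = 500, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

def grad_i(w, i):                  # gradient of the i-th component function
    return (A[i] @ w - b[i]) * A[i]

eta, epochs = 0.01, 30
w = np.zeros(d)
for _ in range(epochs):
    w_snap = w.copy()                       # snapshot point
    g_full = A.T @ (A @ w_snap - b) / n     # full gradient at the snapshot
    for _ in range(n):                      # inner loop of length m = n
        i = rng.integers(n)
        # SVRG step: unbiased, and its variance vanishes as w nears optimum
        w -= eta * (grad_i(w, i) - grad_i(w_snap, i) + g_full)

w_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to least-squares solution:", np.linalg.norm(w - w_star))
```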


Journal

Journal title: SeMA Journal

Year: 2010

ISSN: 1575-9822,2254-3902

DOI: 10.1007/bf03322539